H2 Database Clustering

MOTADATA H2 DB HA Clustering

High Availability (HA) means that a server/application must be durable and able to operate continuously, without failure, for a long time.

  • When a failure occurs, this process recovers partial or failed transactions and restores the system to normal operation almost immediately. Here we achieve this by sharing the config DB: every change to the database is replicated to all machines that are connected as HA nodes with the master.

Technology: H2 database clustered using H2’s built-in clustering mode.

Part 1

H2 Understanding & Config File:

  • The H2 DB is crucial for config database clustering. The setup works with 3 H2 databases: each runs on a different server, and every server holds the same copy of the database.

  • If one server fails (power, hardware, or network failure), the other servers can still continue to work.

  • Let’s understand the clustering mechanism using 3 server nodes (192.168.1.121, 192.168.1.124, 192.168.1.132). Let’s say server 192.168.1.121 is the master, so for the master we change the config of 192.168.1.121 only.

Config file changes for Master:

  • Locate the file h2-conf.yml under /motadata/motadata/config. Under host:, provide all host names as below. For example, since we are setting up 3 servers for clustering, we provide host like:

    host: "localhost:9092,192.168.1.124:9092,192.168.1.132:9092"
    

Define target-host and provide a value like:

target-host: "192.168.1.124:9092,192.168.1.132:9092"
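Taken together, the master’s h2-conf.yml ends up with a block like the following (a sketch assuming host and target-host are top-level keys in the file; leave any other existing keys untouched):

```yaml
# /motadata/motadata/config/h2-conf.yml on the master (192.168.1.121)
# host: every cluster member; "localhost" stands in for this node itself
host: "localhost:9092,192.168.1.124:9092,192.168.1.132:9092"
# target-host: the other nodes that every database change is replicated to
target-host: "192.168.1.124:9092,192.168.1.132:9092"
```

Each slave follows the same pattern: localhost replaces the node’s own IP in host, and target-host lists the other two nodes.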

Config file changes for Slave 1 – (IP 132):

  • The config file is the same: /motadata/motadata/config/h2-conf.yml. Under host:, provide all host names as below:

    host: "localhost:9092,192.168.1.124:9092,192.168.1.121:9092"
    
  • Define target-host and provide a value like:

    target-host: "192.168.1.124:9092,192.168.1.121:9092"
    

Config file changes for Slave 2 (IP 124):

  • The config file is the same: /motadata/motadata/config/h2-conf.yml.

  • Under host:, provide all host names as below:

    host: "localhost:9092,192.168.1.132:9092,192.168.1.121:9092"
    
  • Define target-host and provide a value like:

    target-host: "192.168.1.132:9092,192.168.1.121:9092"
    

Part 2: Exe and Service

  • We have a separate exe named “motadata-ha” for HA clustering; it must be available in the /motadata/motadata/ location.

  • Make sure that the “motadata-ha” exe has executable permission (i.e. chmod 775 or a+x).

Master Node Service:

    1. Stop the motadata service, if it is running: service motadata stop

    2. Run the exe placed as noted above with the argument “startcluster”, as shown below:

      ./motadata-ha startcluster (In Master Node)
      
      // The “startcluster” argument tells the exe to apply clustering when it runs.
      
Slave Node Service:

  • Start the 2 slave servers by passing the below argument:

    nohup ./motadata-ha startdb & (Slave Node)
    
  • The “startdb” argument simply starts the H2 DB in normal mode. That’s it: H2 DB clustering is now working on all 3 servers. Once motadata-ha has been started on all slave nodes, check the status of each using the below command:

    ps aux | grep motadata-ha
    
  • On the master node, start the motadata service with the below commands:

    service motadata start
    service motadata status
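The `ps aux | grep motadata-ha` check above can be wrapped in a small helper so every node reports its state the same way. This is only a sketch: `check_running` is a hypothetical helper name, and `pgrep -f` matches against full command lines much like `ps aux | grep` does.

```shell
# Hypothetical helper: report whether a named process is running on this node.
check_running() {
  # pgrep -f matches against full command lines, like `ps aux | grep`
  if pgrep -f "$1" >/dev/null 2>&1; then
    echo "$1 is running"
  else
    echo "$1 is not running"
  fi
}

# Run on each slave node after starting the exe:
check_running motadata-ha
```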
    

Part 3: If the master goes down, how do we bring up a slave?

If the master goes down for any reason, don’t panic; we can bring a slave up to serve, as stated below.

  1. Check whether the HA exe is running on the slave, using the command:

    ps aux | grep motadata-ha (must be in running state)
    
  2. On the master node (once it comes back up), execute the below commands:

    nohup ./motadata-ha startdb &
    ps aux | grep motadata-ha (must be in running state)
    
  3. After step 2 succeeds, start the motadata service on any one of the slave nodes:

    service motadata start  service motadata status